Minimizers of Sparsity Regularized Huber Loss Function
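The full text of the paper is not reproduced on this page, but the title refers to the standard Huber loss. As context only, here is a minimal sketch of that function and of a generic sparsity-regularized objective built on it; the threshold name `delta`, the weight `lam`, and the exact form of the objective are illustrative assumptions, not taken from the paper.

```python
def huber(r, delta=1.0):
    # Standard Huber function: quadratic near zero, linear in the tails.
    # (delta is the usual transition threshold; name is an assumption.)
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r
    return delta * (a - 0.5 * delta)

def objective(x, y, delta=1.0, lam=0.1):
    # Illustrative sparsity-regularized Huber objective:
    # sum of Huber losses on residuals plus an l1 penalty on x.
    data_term = sum(huber(yi - xi, delta) for xi, yi in zip(x, y))
    penalty = lam * sum(abs(xi) for xi in x)
    return data_term + penalty

# huber(0.5) -> 0.125 (quadratic branch); huber(2.0) -> 1.5 (linear branch)
```

The Huber loss is popular in robust regression because large residuals are penalized only linearly, so outliers do not dominate the fit.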

Authors

Abstract


Related articles

A Unified Framework for Consistency of Regularized Loss Minimizers

Setting $\lambda_n = \alpha\,\varepsilon_n$, for $\alpha \ge 1$:

$$\begin{aligned}
\mathcal{L}(\hat\theta_n) - \mathcal{L}(\theta^*)
&\le \hat{\mathcal{L}}_n(\hat\theta_n) - \hat{\mathcal{L}}_n(\theta^*) + \varepsilon_{n, c(\hat\theta_n)} + \varepsilon_{n, c(\theta^*)} \\
&\le -\lambda_n R(\hat\theta_n) + \lambda_n R(\theta^*) + \varepsilon_{n, c(\hat\theta_n)} + \varepsilon_{n, c(\theta^*)} + \xi \\
&\le -\lambda_n r(c(\hat\theta_n)) + \lambda_n R(\theta^*) + \varepsilon_{n, c(\hat\theta_n)} + \varepsilon_{n, c(\theta^*)} + \xi \\
&= \varepsilon_n\big({-\alpha}\, r(c(\hat\theta_n)) + c(\hat\theta_n)\big) + \varepsilon_n\big(\alpha R(\theta^*) + c(\theta^*)\big) + \xi \\
&\le \varepsilon_n\big({-r}(c(\hat\theta_n)) + c(\hat\theta_n)\big) + \varepsilon_n\big(\alpha R(\theta^*) + c(\theta^*)\big) + \xi \\
&\le \varepsilon_n\big(\alpha R(\theta^*) + c(\theta^*)\big) + \xi
\end{aligned}$$

A.2. Proof of Theorem 2. Proof...


A Unified Framework for Consistency of Regularized Loss Minimizers

We characterize a family of regularized loss minimization problems that satisfy three properties: scaled uniform convergence, super-norm regularization, and norm-loss monotonicity. We show several theoretical guarantees within this framework, including loss consistency, norm consistency, sparsistency (i.e. support recovery) as well as sign consistency. A number of regularization problems can be...


Speed and Sparsity of Regularized Boosting

Boosting algorithms with ℓ1-regularization are of interest because ℓ1 regularization leads to sparser composite classifiers. Moreover, Rosset et al. have shown that for separable data, standard ℓp-regularized loss minimization results in a margin-maximizing classifier in the limit as regularization is relaxed. For the case p = 1, we extend these results by obtaining explicit convergence bounds o...
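The sparsity-inducing effect of ℓ1 regularization mentioned in this abstract can be illustrated with the soft-thresholding operator; this is a generic sketch (not the boosting algorithm from the paper), using the closed-form minimizer that holds when the design matrix is the identity.

```python
def soft_threshold(v, t):
    # Proximal operator of t * |.|: shrinks v toward zero and
    # sets it exactly to zero when |v| <= t -- the source of sparsity.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lasso_identity(b, lam):
    # Minimize 0.5 * ||x - b||^2 + lam * ||x||_1 coordinate-wise;
    # with an identity design this decouples into soft-thresholding.
    return [soft_threshold(bi, lam) for bi in b]

# Small coefficients are driven exactly to zero:
# lasso_identity([3.0, 0.2, -1.5], 1.0) -> [2.0, 0.0, -0.5]
```

Because the penalty is non-differentiable at zero, coordinates below the threshold are set exactly to zero rather than merely shrunk, which is why ℓ1-regularized minimizers are sparse while ℓ2-regularized ones generally are not.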


Sparsity-Regularized HMAX for Visual Recognition

About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is abl...


Bounds on the Minimizers of (nonconvex) Regularized Least-Squares

This is a theoretical study on the minimizers of cost functions composed of an ℓ2 data-fidelity term and a possibly nonsmooth or nonconvex regularization term acting on the differences or the discrete gradients of the image or the signal to restore. More precisely, we derive general nonasymptotic analytical bounds characterizing the local and the global minimizers of these cost functions. We fi...
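The generic cost function described in this abstract can be written down directly; the following sketch is illustrative (the potential `phi` and weight `beta` are placeholder names, and `phi` may be nonsmooth or nonconvex as the abstract allows).

```python
def cost(x, y, beta, phi):
    # l2 data-fidelity plus a regularizer phi acting on the first
    # differences of x (a 1-D discrete-gradient regularization).
    fidelity = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    regularizer = sum(phi(x[i + 1] - x[i]) for i in range(len(x) - 1))
    return fidelity + beta * regularizer

# Example: absolute-value potential (total-variation-like penalty).
tv = lambda d: abs(d)
# cost([1.0, 1.0, 2.0], [1.0, 1.5, 2.0], beta=0.5, phi=tv)
#   = (0 + 0.25 + 0) + 0.5 * (0 + 1) = 0.75
```

Nonconvex choices of `phi` (e.g. truncated quadratics) favor piecewise-constant restorations but can create many local minimizers, which is what motivates the bounds studied in the cited paper.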



Journal

Journal title: Journal of Optimization Theory and Applications

Year: 2020

ISSN: 0022-3239,1573-2878

DOI: 10.1007/s10957-020-01745-3